Error Estimates for Multi-Penalty Regularization under General Source Condition

Authors

  • Abhishake Rastogi
  • Dhinaharan Nagamalai
Abstract

In learning theory, the convergence of the regression problem is investigated for least-square Tikhonov regularization schemes in both the RKHS-norm and the L2-norm. We consider the multi-penalty least-square regularization scheme under a general source condition with polynomial decay of the eigenvalues of the integral operator. One motivation for this work is to discuss convergence for the widely considered manifold regularization scheme. The optimal convergence rates of the multi-penalty regularizer are achieved in the interpolation norm using the concept of effective dimension. Further, we propose a penalty balancing principle based on augmented Tikhonov regularization for the choice of the regularization parameters. The superiority of multi-penalty regularization over single-penalty regularization is shown on an academic example and the moon data set.
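To make the two-penalty scheme concrete, here is a minimal numerical sketch, not the authors' algorithm: the kernel choice, the k-NN graph construction, and all parameter values are illustrative assumptions. It combines a least-square data fit with an RKHS-norm penalty and a graph-Laplacian smoothness penalty of the kind used in manifold regularization.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and rows of Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def knn_laplacian(X, k=5):
    # Unnormalized graph Laplacian L = D - W of a symmetrized k-NN graph
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]  # skip self at position 0
    for i in range(n):
        W[i, nbrs[i]] = 1.0
    W = np.maximum(W, W.T)
    return np.diag(W.sum(axis=1)) - W

def multi_penalty_fit(X, y, lam1, lam2, gamma=1.0, k=5):
    # Minimize ||K a - y||^2 + lam1*n*a'Ka + lam2*n*a'K L K a over a.
    # Setting the gradient to zero and assuming K invertible gives
    #   (K + lam1*n*I + lam2*n*L K) a = y.
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    L = knn_laplacian(X, k)
    alpha = np.linalg.solve(K + lam1 * n * np.eye(n) + lam2 * n * (L @ K), y)
    return alpha, K

# Toy usage: predictions on the training points are K @ alpha
X = np.random.default_rng(0).normal(size=(30, 2))
y = np.sign(X[:, 0])
alpha, K = multi_penalty_fit(X, y, lam1=1e-2, lam2=1e-2)
pred = K @ alpha
```

Setting lam2 = 0 collapses the linear system to ordinary kernel ridge regression, which is one way to sanity-check the implementation against the single-penalty case.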


Similar Articles

Manifold learning via Multi-Penalty Regularization

Manifold regularization is an approach that exploits the geometry of the marginal distribution. The main goal of this paper is to analyze the convergence of such regularization algorithms in learning theory. We propose a more general multi-penalty framework and establish optimal convergence rates under a general smoothness assumption. We present a theoretical analysis of the perform...


Manifold regularization based on Nyström type subsampling

In this paper, we study Nyström type subsampling for large-scale kernel methods to reduce the computational complexity of big data. We discuss a multi-penalty regularization scheme based on Nyström type subsampling, motivated by the well-studied manifold regularization schemes. We develop a theoretical analysis of the multi-penalty least-square regularization scheme under the general ...
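The Nyström idea referred to above can be sketched as follows. This is a generic rank-m approximation of the kernel matrix, not the specific scheme of the paper; the function names and the uniform subsampling rule are assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and rows of Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_approx(X, m, gamma=1.0, seed=0):
    # Rank-m Nystrom approximation K ~= C @ pinv(W) @ C.T,
    # where C = K[:, S] and W = K[S, S] for a uniform random
    # subsample S of m landmark points.
    rng = np.random.default_rng(seed)
    S = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[S], gamma)
    W = rbf_kernel(X[S], X[S], gamma)
    return C @ np.linalg.pinv(W) @ C.T

# Usage: compare the rank-10 approximation against the full kernel matrix
X = np.random.default_rng(1).normal(size=(40, 3))
K = rbf_kernel(X, X)
K_hat = nystrom_approx(X, m=10)
rel_err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

When m equals the sample size and K is invertible, the approximation reproduces K exactly; smaller m trades accuracy for a roughly O(n m^2) factorization cost in place of the O(n^3) cost of working with the full kernel matrix.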


Optimal rates for the regularized learning algorithms under general source condition (arXiv:1611.01900v1 [stat.ML], 7 Nov 2016)

We consider learning algorithms under a general source condition with polynomial decay of the eigenvalues of the integral operator in the vector-valued function setting. We discuss the upper convergence rates of the Tikhonov regularizer under a general source condition corresponding to an increasing monotone index function. The convergence issues are studied for general regularization schemes by using...



Error estimates for joint Tikhonov- and Lavrentiev-regularization of constrained control problems

We consider joint Tikhonov- and Lavrentiev-regularization of control problems with pointwise control- and state-constraints. We derive estimates for the error introduced by the Tikhonov regularization. With the help of these results we show that, if the solution of the unconstrained problem has no active constraints, the same holds for the Tikhonov-regularized solution if the regulari...



Journal:

Volume:   Issue:

Pages:  -

Publication date: 2017